
    Improving High Resolution Histology Image Classification with Deep Spatial Fusion Network

    Histology imaging is an essential diagnostic method for determining the grade and stage of cancer in different tissues, and is especially important in breast cancer diagnosis. Specialists often disagree on the final diagnosis of biopsy tissue because of its complex morphological variety. Although convolutional neural networks (CNNs) are effective at extracting discriminative features for image classification, directly training a CNN on high resolution histology images is currently computationally infeasible. Moreover, inconsistent discriminative features are often distributed over the whole histology image, which poses challenges for patch-based CNN classification methods. In this paper, we propose a novel architecture for automatic classification of high resolution histology images. First, an adapted residual network is employed to explore hierarchical features without attenuation. Second, we develop a robust deep fusion network that exploits the spatial relationship between patches and learns to correct the prediction bias arising from the inconsistent distribution of discriminative features. The proposed method is evaluated using 10-fold cross-validation on 400 high resolution breast histology images with balanced labels; it reports 95% accuracy on 4-class classification and 98.5% accuracy with 99.6% AUC on 2-class classification (carcinoma vs. non-carcinoma), which substantially outperforms previous methods and is close to pathologist performance. Comment: 8 pages, MICCAI workshop proceedings
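
    The patch-then-fuse design can be illustrated with a short sketch. The snippet below is only a minimal, assumed stand-in: the tiny patch CNN takes the place of the adapted residual network, and the 4x4 patch grid, layer sizes and 4-class output are placeholder choices, not the architecture reported in the paper.

```python
# Minimal sketch of patch-then-fuse classification: a small CNN scores each
# patch, and a fusion network reasons over the spatial grid of patch
# probabilities. All sizes here are illustrative assumptions.
import torch
import torch.nn as nn

NUM_CLASSES, GRID = 4, (4, 4)                 # assumed 4 classes, 4x4 patch grid

patch_cnn = nn.Sequential(                    # stand-in for the adapted residual network
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, NUM_CLASSES),
)

fusion_net = nn.Sequential(                   # learns to correct per-patch prediction bias
    nn.Flatten(),
    nn.Linear(GRID[0] * GRID[1] * NUM_CLASSES, 64), nn.ReLU(),
    nn.Linear(64, NUM_CLASSES),
)

def classify_image(patches: torch.Tensor) -> torch.Tensor:
    """patches: (rows*cols, 3, H, W) tiles cut from one high resolution image."""
    probs = patch_cnn(patches).softmax(dim=1)              # per-patch class probabilities
    grid = probs.view(1, GRID[0], GRID[1], NUM_CLASSES)    # restore the spatial layout
    return fusion_net(grid)                                # image-level logits

print(classify_image(torch.randn(16, 3, 64, 64)).shape)    # torch.Size([1, 4])
```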

    A Compact Representation of Histopathology Images using Digital Stain Separation & Frequency-Based Encoded Local Projections

    In recent years, histopathology images have been increasingly used as a diagnostic tool in the medical field. Accurately diagnosing a biopsy sample requires significant expertise, and as such the process can be time-consuming and is prone to uncertainty and error. With the advent of digital pathology, using image recognition systems to highlight problem areas or locate similar images can aid pathologists in making quick and accurate diagnoses. In this paper, we specifically consider the encoded local projections (ELP) algorithm, which has previously shown some success as a tool for classification and recognition of histopathology images. We build on this success by proposing a modified algorithm that captures the local frequency information of the image. The proposed algorithm estimates local frequencies by quantifying the changes in multiple projections in local windows of greyscale images. By doing so, we remove the need to store the full projections, thus significantly reducing the histogram size and decreasing computation time for image retrieval and classification tasks. Furthermore, we investigate the effectiveness of applying our method to histopathology images that have been digitally separated into their hematoxylin and eosin stain components. The proposed algorithm is tested on the publicly available invasive ductal carcinoma (IDC) data set. The histograms are used to train an SVM to classify the data. The experiments show that the proposed method outperforms the original ELP algorithm in image retrieval tasks. On classification tasks, the results are comparable to state-of-the-art deep learning methods and better than many handcrafted features from the literature. Comment: Accepted for publication in the International Conference on Image Analysis and Recognition (ICIAR 2019)
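
    The local-frequency histogram can be approximated in a few lines. The sketch below is a rough stand-in rather than the ELP-based descriptor itself: the window size, bin count, zero-crossing counting as a frequency proxy, and the linear SVM are all assumptions made for illustration.

```python
# Rough sketch of a frequency-based local descriptor: in each local window,
# take 1-D projections, count sign changes around the mean as a crude
# local-frequency estimate, and accumulate the estimates into a compact histogram.
import numpy as np
from sklearn.svm import SVC

def frequency_histogram(gray: np.ndarray, win: int = 16, bins: int = 16) -> np.ndarray:
    h = np.zeros(bins)
    for r in range(0, gray.shape[0] - win + 1, win):
        for c in range(0, gray.shape[1] - win + 1, win):
            window = gray[r:r + win, c:c + win]
            for proj in (window.sum(axis=0), window.sum(axis=1)):  # two projections
                signs = np.sign(proj - proj.mean())
                crossings = np.count_nonzero(np.diff(signs))       # frequency proxy
                h[min(crossings, bins - 1)] += 1
    return h / max(h.sum(), 1)                                     # normalised histogram

# Toy usage: random "images" with two arbitrary labels, classified by an SVM.
rng = np.random.default_rng(0)
X = np.array([frequency_histogram(rng.random((64, 64))) for _ in range(20)])
y = np.repeat([0, 1], 10)
print(SVC(kernel="linear").fit(X, y).score(X, y))
```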

    Whole slide image registration for the study of tumor heterogeneity

    Consecutive thin sections of tissue samples make it possible to study local variation in, e.g., protein expression and tumour heterogeneity by staining for a new protein in each section. In order to compare and correlate patterns of different proteins, the images have to be registered with high accuracy. The problem we want to solve is registration of gigapixel whole slide images (WSI). This presents three challenges: (i) the images are very large; (ii) thin sections result in artifacts that make global affine registration prone to very large local errors; (iii) local affine registration is required to preserve correct tissue morphology (local size, shape and texture). In our approach we compare WSI registration based on automatic and manual feature selection on either the full image or natural sub-regions (as opposed to square tiles). Working with natural sub-regions in an interactive tool makes it possible to exclude regions containing scientifically irrelevant information. We also present a new way to visualize local registration quality with a Registration Confidence Map (RCM). With this method, intra-tumor heterogeneity and characteristics of the tumor microenvironment can be observed and quantified. Comment: MICCAI 2018 - Computational Pathology and Ophthalmic Medical Image Analysis - COMPAY
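
    The Registration Confidence Map idea can be mimicked with a simplified per-region alignment. The sketch below uses translation-only phase correlation per square tile instead of the local affine registration on natural sub-regions described above; the tile size and the error-to-confidence mapping are assumptions made for the example.

```python
# Simplified stand-in for per-region registration quality: each sub-region of
# the moving image is aligned to the fixed image by phase correlation
# (translation only), and the per-region error becomes a crude confidence map.
import numpy as np
from skimage.registration import phase_cross_correlation

def registration_confidence_map(fixed: np.ndarray, moving: np.ndarray, tile: int = 128):
    rows, cols = fixed.shape[0] // tile, fixed.shape[1] // tile
    rcm = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            sl = (slice(i * tile, (i + 1) * tile), slice(j * tile, (j + 1) * tile))
            _, error, _ = phase_cross_correlation(fixed[sl], moving[sl])
            rcm[i, j] = 1.0 - error          # higher value = more confident local alignment
    return rcm

# Toy usage with a synthetic pair (second image = first plus mild noise).
rng = np.random.default_rng(0)
fixed = rng.random((512, 512))
moving = fixed + 0.05 * rng.random((512, 512))
print(registration_confidence_map(fixed, moving).round(2))
```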

    Special issue on microscopic image processing


    GridIMAGE: A Novel Use of Grid Computing to Support Interactive Human and Computer-Assisted Detection Decision Support

    This paper describes a Grid-aware image reviewing system (GridIMAGE) that allows practitioners to (a) select images from multiple geographically distributed Digital Imaging and Communications in Medicine (DICOM) servers, (b) send those images to a specified group of human readers and computer-assisted detection (CAD) algorithms, and (c) obtain and compare interpretations from human readers and CAD algorithms. The currently implemented system was developed using the National Cancer Institute caGrid infrastructure and is designed to support the identification of lung nodules on thoracic computed tomography. However, the infrastructure is general and can support any type of distributed review. caGrid data and analytical services are used to link DICOM image databases and CAD systems and to interact with human readers. Moreover, the service-oriented and distributed structure of the GridIMAGE framework enables a flexible system, which can be deployed within an institution (linking multiple DICOM servers and CAD algorithms) and in a Grid environment (linking the resources of collaborating research groups). GridIMAGE provides a framework that allows practitioners to obtain interpretations from one or more human readers or CAD algorithms. It also provides a mechanism to allow cooperative imaging groups to systematically perform image interpretation tasks associated with research protocols.

    Comparing computer-generated and pathologist-generated tumour segmentations for immunohistochemical scoring of breast tissue microarrays

    BACKGROUND: Tissue microarrays (TMAs) have become a valuable resource for biomarker expression in translational research. Immunohistochemical (IHC) assessment of TMAs is the principal method for analysing large numbers of patient samples, but manual IHC assessment of TMAs remains a challenging and laborious task. With advances in image analysis, computer-generated analyses of TMAs have the potential to lessen the burden of expert pathologist review. METHODS: In current commercial software, computerised oestrogen receptor (ER) scoring relies on tumour localisation in the form of hand-drawn annotations. In this study, tumour localisation for ER scoring was evaluated by comparing computer-generated segmentation masks with those of two specialist breast pathologists. Automatically and manually obtained segmentation masks were used to obtain IHC scores for thirty-two ER-stained invasive breast cancer TMA samples using FDA-approved IHC scoring software. RESULTS: Although pixel-level comparisons showed lower agreement between automated and manual segmentation masks (κ = 0.81) than between pathologists' masks (κ = 0.91), this had little impact on the computed IHC scores (Allred: κ = 0.91; Quickscore: κ = 0.92). CONCLUSIONS: The proposed automated system provides consistent measurements, thus ensuring standardisation, and shows promise for increasing IHC analysis of nuclear staining in TMAs from large clinical trials.
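
    The pixel-level agreement statistic quoted above (Cohen's kappa between two segmentation masks) is straightforward to compute; the toy masks below are placeholders, not data from the study.

```python
# Cohen's kappa between two binary tumour masks, flattened to per-pixel labels.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
automated = rng.integers(0, 2, size=(256, 256))              # automated tumour mask
manual = automated.copy()
flip = rng.random(automated.shape) < 0.05                    # disagree on ~5% of pixels
manual[flip] = 1 - manual[flip]

kappa = cohen_kappa_score(automated.ravel(), manual.ravel())
print(f"pixel-level Cohen's kappa: {kappa:.2f}")
```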

    Capturing Global Spatial Context for Accurate Cell Classification in Skin Cancer Histology

    The spectacular response observed in clinical trials of immunotherapy in patients with previously incurable melanoma, a highly aggressive form of skin cancer, calls for a better understanding of the cancer-immune interface. Computational pathology provides a unique opportunity to spatially dissect this interface on digitised pathological slides. Accurate cellular classification is key to ensuring meaningful results, but is often challenging even with state-of-the-art machine learning and deep learning methods. We propose a hierarchical framework, which mirrors the way pathologists perceive tumour architecture and define tumour heterogeneity, to improve cell classification methods that rely solely on cell nuclei morphology. The SLIC superpixel algorithm was used to segment and classify tumour regions in low resolution H&E-stained histological images of melanoma skin cancer to provide global context. Classification of superpixels into tumour, stroma, epidermis and lumen/white space yielded 97.7% training set accuracy and 95.7% testing set accuracy on 58 whole-tumour images of the TCGA melanoma dataset. The superpixel classification was projected down to high resolution images to enhance the performance of a single-cell classifier based on cell nuclear morphological features, increasing its accuracy from 86.4% to 91.6%. Furthermore, a voting scheme was proposed to use global context as a priori biological knowledge, pushing the accuracy further to 92.8%. This study demonstrates how using the global spatial context can accurately characterise the tumour microenvironment and allow us to extend significantly beyond single-cell morphological classification. Comment: Accepted by MICCAI COMPAY 2018 workshop
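
    The global-context step can be sketched as follows. This is only an assumed illustration of superpixel classification and projection back to pixel level: the mean-colour features, random forest, and dummy labels stand in for the trained superpixel classifier described above.

```python
# Minimal sketch: SLIC superpixels on a low-resolution image, one simple
# feature vector per superpixel, a placeholder classifier, and projection of
# the superpixel labels back onto the pixel grid as a context map.
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
image = rng.random((256, 256, 3))                         # stand-in for a low-res H&E tile
segments = slic(image, n_segments=200, start_label=0)

seg_ids = np.unique(segments)
features = np.array([image[segments == s].mean(axis=0) for s in seg_ids])  # mean RGB per superpixel

# Dummy region labels (0=tumour, 1=stroma, 2=epidermis, 3=lumen) for illustration.
labels = rng.integers(0, 4, size=len(seg_ids))
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(features, labels)

# Project superpixel predictions back to pixel level to give each cell a context label.
lut = np.zeros(segments.max() + 1, dtype=int)
lut[seg_ids] = clf.predict(features)
context = lut[segments]
print(context.shape, np.unique(context))
```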

    Segmentation of epidermal tissue with histopathological damage in images of haematoxylin and eosin stained human skin.

    Background: Digital image analysis has the potential to address issues surrounding traditional histological techniques, including a lack of objectivity and high variability, through the application of quantitative analysis. A key initial step in image analysis is the identification of regions of interest, and a widely applied methodology is segmentation. This paper proposes the application of image analysis techniques to segment skin tissue with varying degrees of histopathological damage. The segmentation of human tissue is challenging as a consequence of the complexity of the tissue structures and inconsistencies in tissue preparation; hence there is a need for a new robust method capable of handling the additional challenges arising from histopathological damage. Methods: A new algorithm has been developed which combines enhanced colour information, created following a transformation to the L*a*b* colourspace, with general image intensity information. A colour normalisation step is included to enhance the algorithm's robustness to variations in the lighting and staining of the input images. The resulting optimised image is subjected to thresholding, and the segmentation is fine-tuned using a combination of morphological processing and object classification rules. The segmentation algorithm was tested on 40 digital images of haematoxylin & eosin (H&E) stained skin biopsies. Accuracy, sensitivity and specificity of the algorithmic procedure were assessed through comparison of the proposed methodology against manual methods. Results: Experimental results show the proposed fully automated methodology segments the epidermis with a mean specificity of 97.7%, a mean sensitivity of 89.4% and a mean accuracy of 96.5%. When a simple user interaction step is included, the specificity increases to 98.0%, the sensitivity to 91.0% and the accuracy to 96.8%. The algorithm segments effectively for different severities of tissue damage. Conclusions: Epidermal segmentation is a crucial first step in a range of applications including melanoma detection and the assessment of histopathological damage in skin. The proposed methodology is able to segment the epidermis with different levels of histological damage. The basic method framework could be applied to segmentation of other epithelial tissues.
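
    A heavily simplified version of the colour-plus-intensity segmentation step is sketched below. The channel combination, Otsu threshold and structuring element are assumptions made for illustration; the full pipeline described above also includes colour normalisation and object classification rules.

```python
# Simplified sketch: convert to L*a*b*, combine one colour channel with
# lightness, threshold with Otsu's method, and clean up with morphology.
import numpy as np
from skimage import color, filters, morphology

def segment_epidermis(rgb: np.ndarray) -> np.ndarray:
    lab = color.rgb2lab(rgb)
    combined = 0.5 * lab[..., 2] + 0.5 * lab[..., 0]        # illustrative b* + L* mix
    mask = combined > filters.threshold_otsu(combined)
    mask = morphology.remove_small_objects(mask, min_size=64)
    return morphology.binary_closing(mask, morphology.disk(3))

rng = np.random.default_rng(0)
print(segment_epidermis(rng.random((128, 128, 3))).shape)    # toy input, (128, 128) mask
```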

    The significance of tumour microarchitectural features in breast cancer prognosis: a digital image analysis

    BACKGROUND: As only a minor portion of the information present in histological sections is accessible by eye, recognition and quantification of complex patterns and relationships among constituents relies on digital image analysis. In this study, our working hypothesis was that, with the application of digital image analysis technology, visually unquantifiable breast cancer microarchitectural features can be rigorously assessed and tested as prognostic parameters for invasive breast carcinoma of no special type. METHODS: Digital image analysis was performed using public domain software (ImageJ) on tissue microarrays from a cohort of 696 patients, and validated with a commercial platform (Visiopharm). Quantified features included elements defining tumour microarchitecture, with emphasis on the extent of the tumour-stroma interface. The differential prognostic impact of tumour nest microarchitecture in the four immunohistochemical surrogates for molecular classification was analysed. Prognostic parameters included axillary lymph node status, breast cancer-specific survival, and time to distant metastasis. Associations of each feature with prognostic parameters were assessed using logistic regression and Cox proportional hazards models adjusting for age at diagnosis, grade, and tumour size. RESULTS: An arrangement in numerous small nests was associated with axillary lymph node involvement. The association was stronger in luminal tumours (odds ratio (OR) = 1.39, p = 0.003 for a 1-SD increase in nest number; OR = 0.75, p = 0.006 for mean nest area). Nest number was also associated with survival (hazard ratio (HR) = 1.15, p = 0.027), but total nest perimeter was the parameter most significantly associated with survival in luminal tumours (HR = 1.26, p = 0.005). In the relatively small cohort of triple-negative tumours, mean circularity was associated with time to distant metastasis (HR = 1.71, p = 0.027) and survival (HR = 1.8, p = 0.02). CONCLUSIONS: We propose that tumour arrangement in a few large nests indicates decreased metastatic potential. By contrast, organisation in numerous small nests provides the tumour with increased metastatic potential to regional lymph nodes, and a dispersed pattern of small nests is associated with a tendency towards decreased breast cancer-specific survival. Although further validation studies are required before the argument for routine quantification of microarchitectural features is established, our approach is consistent with the demand for cost-effective methods for triaging breast cancer patients who are more likely to benefit from chemotherapy.
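
    The kind of microarchitectural measurements described above (nest number, mean nest area, total nest perimeter, circularity) can be derived from a binary tumour mask with standard region properties; the toy mask below is a placeholder, not study data, and the preprocessing used in the study is not reproduced here.

```python
# Label connected tumour nests in a binary mask and derive simple
# microarchitectural features from their region properties.
import numpy as np
from skimage.measure import label, regionprops

rng = np.random.default_rng(0)
mask = rng.random((256, 256)) > 0.8                            # stand-in tumour mask

nests = regionprops(label(mask))
areas = np.array([n.area for n in nests])
perims = np.array([n.perimeter for n in nests])
circularity = 4 * np.pi * areas / np.maximum(perims, 1) ** 2   # 1.0 = perfect circle

print(f"nest number: {len(nests)}")
print(f"mean nest area: {areas.mean():.1f} px")
print(f"total nest perimeter: {perims.sum():.1f} px")
print(f"mean circularity: {circularity.mean():.2f}")
```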

    Multimodal microscopy for automated histologic analysis of prostate cancer

    Background: Prostate cancer is the single most prevalent cancer in US men, and the gold standard for its diagnosis is histologic assessment of biopsies. Manual assessment of stained tissue for all biopsies limits speed and accuracy in clinical practice and in research on prostate cancer diagnosis. We sought to develop a fully automated multimodal microscopy method to distinguish cancerous from non-cancerous tissue samples. Methods: We recorded chemical data from an unstained tissue microarray (TMA) using Fourier transform infrared (FT-IR) spectroscopic imaging. Using pattern recognition, we identified epithelial cells without user input. We fused the cell type information with the corresponding stained images commonly used in clinical practice. Extracted morphological features, optimized by a two-stage feature selection method using a minimum-redundancy-maximum-relevance (mRMR) criterion and sequential floating forward selection (SFFS), were applied to classify tissue samples as cancer or non-cancer. Results: We achieved high accuracy (area under the ROC curve (AUC) > 0.97) in cross-validations on each of two data sets that were stained under different conditions. When the classifier was trained on one data set and tested on the other, an AUC value of ~0.95 was observed. In the absence of IR data, the performance of the same classification system dropped for both data sets and between data sets. Conclusions: We were able to achieve a very effective fusion of the information from two different images that provide very different types of data with different characteristics. The method is entirely transparent to the user and does not involve any adjustment or decision-making based on spectral data. By combining the IR and optical data, we achieved highly accurate classification.
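
    The classification stage on extracted morphological features can be sketched with a simpler stand-in for the paper's mRMR + SFFS pipeline: plain forward sequential feature selection followed by a classifier scored by ROC AUC. The synthetic features, labels and classifier choice below are placeholders.

```python
# Forward sequential feature selection (a non-floating stand-in for SFFS)
# followed by cross-validated ROC AUC on the selected features.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((120, 30))                                  # 120 samples x 30 morphological features
y = (X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.random(120) > 0.8).astype(int)  # toy labels

clf = LogisticRegression(max_iter=1000)
selector = SequentialFeatureSelector(clf, n_features_to_select=5, direction="forward")
X_sel = selector.fit_transform(X, y)

auc = cross_val_score(clf, X_sel, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC on selected features: {auc:.2f}")
```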